[Bugfix][DO NOT MERGE] Free req.req_pool_idx when it is finished in the mixed_chunk mode to solve memory leak #10871

Closed

Muqi1029 wants to merge 2 commits into sgl-project:main from Muqi1029:bugfix/mixed_chunk

Conversation

@Muqi1029 (Contributor) commented Sep 24, 2025

Motivation

When using mixed chunking, a request may complete in the first prefill stage. In the original code, only out_cache_loc is freed, while req_pool_idx remains allocated. As a result, the scheduler’s check_memory function detects this as a memory leak and raises an error, which causes the entire SGLang Engine to fail and significantly impacts its stability.

Modifications

Free req_pool_idx in process_batch_result_prefill.
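
For illustration, a minimal sketch of where the added call would land, assuming it goes into the mixed-chunk branch of process_batch_result_prefill quoted later in this thread (the exact placement is my assumption, not a verbatim diff):

if self.is_mixed_chunk and self.enable_overlap and req.finished():
    # Existing behavior: free the one delayed token for the mixed decode batch
    j = len(batch.out_cache_loc) - len(batch.reqs) + i
    self.token_to_kv_pool_allocator.free(batch.out_cache_loc[j : j + 1])
    # Proposed fix: also release the request slot so check_memory no longer
    # reports a req_to_token_pool leak
    self.req_to_token_pool.free(req.req_pool_idx)
    continue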

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Muqi1029, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical memory leak within the SGLang Engine's mixed chunking mode. Previously, when a request finished during the initial prefill stage, the req_pool_idx resource was not properly released, leading to memory exhaustion and system instability. The change introduces a specific call to free this resource, thereby preventing errors detected by the scheduler and enhancing the overall robustness of the engine.

Highlights

  • Memory Leak Fix: Addresses a critical memory leak that occurred in the mixed chunking mode when a request completed during the initial prefill stage, where the req_pool_idx was not being freed.
  • Resource Deallocation: Ensures proper resource management by explicitly freeing the req_pool_idx using self.req_to_token_pool.free(req.req_pool_idx) within the process_batch_result_prefill function.
  • System Stability: Prevents the SGLang Engine from failing due to memory exhaustion errors detected by the scheduler, significantly improving its stability.

@gemini-code-assist bot left a comment

Code Review

This pull request addresses a memory leak that occurs in mixed-chunk mode when a request finishes during the prefill stage. The original code failed to free the req_pool_idx, leading to a resource leak. The proposed change adds the necessary call to self.req_to_token_pool.free(req.req_pool_idx), which correctly releases the request slot and resolves the memory leak. The fix is targeted and appears correct.

@MichelleWu351 (Contributor)

This doesn't work; there is still a memory leak.

@Muqi1029 (Contributor, Author)

@MichelleWu351 Maybe your memory leak is not the same case as this one. I tested this with the latest sglang, and it still happens.

Muqi1029 changed the title from "[Bugfix] Free req.req_pool_idx when it is finished in the mixed_chunk mode to solve memory leak" to "[Bugfix][DO NOT MERGE] Free req.req_pool_idx when it is finished in the mixed_chunk mode to solve memory leak" on Oct 25, 2025
@Muqi1029 (Contributor, Author)

I can easily reproduce this memory-leak bug:

Using the latest sglang:

Server

python -m sglang.launch_server --model-path Qwen/Qwen3-8B --port 8888 --enable-mixed-chunk

Client

import requests
import json

url = "http://127.0.0.1:8888/generate"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Just Keep Me"
}

data = {
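    # 40957 tokens pass the HTTP server's context_len check but exceed the
    # scheduler's max_req_input_len (40954, per the error below)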
    "input_ids": [1] * 40957,
}

response = requests.post(url, headers=headers, json=data)

if response.status_code == 200:
    result = response.json()
    print(json.dumps(result, ensure_ascii=False, indent=2))
else:
    print(f"Error: {response.status_code}")
    print(response.text)

Result on the server:

A bad request makes the sglang engine fail its memory-leak check, which raises the following error and brings the whole engine down:

[2025-10-25 07:57:04] The server is fired up and ready to roll!
[2025-10-25 07:57:12 TP0] Input length (40957 tokens) exceeds the maximum allowed length (40954 tokens). Use a shorter input or enable --allow-auto-truncate., self.rid='4781a1d1797f4ef78da4d4e4f3949168'
[2025-10-25 07:57:12 TP0] Prefill batch [10], #new-seq: 1, #new-token: 1, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-10-25 07:57:12] [http_server] Error: Input length (40957 tokens) exceeds the maximum allowed length (40954 tokens). Use a shorter input or enable --allow-auto-truncate.
[2025-10-25 07:57:12 TP0] Scheduler hit an exception: Traceback (most recent call last):
  File "/root/projects/dev_project/sglang/python/sglang/srt/managers/scheduler.py", line 2769, in run_scheduler_process
    scheduler.event_loop_overlap()
  File "/root/.envs/dev/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/projects/dev_project/sglang/python/sglang/srt/managers/scheduler.py", line 1018, in event_loop_overlap
    self.self_check_during_idle()
  File "/root/projects/dev_project/sglang/python/sglang/srt/managers/scheduler_runtime_checker_mixin.py", line 214, in self_check_during_idle
    self.check_memory()
  File "/root/projects/dev_project/sglang/python/sglang/srt/managers/scheduler_runtime_checker_mixin.py", line 149, in check_memory
    self._check_req_pool()
  File "/root/projects/dev_project/sglang/python/sglang/srt/managers/scheduler_runtime_checker_mixin.py", line 135, in _check_req_pool
    raise ValueError(msg)
ValueError: req_to_token_pool memory leak detected!available_size=4095, total_size=4096

Reason

A request whose input_ids length stays below the model's context_len (here 40957 tokens, presumably just under Qwen3-8B's 40960-token limit) passes the length check in the HTTP server, even though it exceeds the scheduler's maximum request input length of 40954 tokens:

def _validate_one_request(
    self, obj: Union[GenerateReqInput, EmbeddingReqInput], input_ids: List[int]
) -> None:
    """Validates that the input token count and the requested token count doesn't exceed the model's context length."""
    # FIXME: unify the length validation logic with the one in the scheduler.
    _max_req_len = self.context_len

    input_token_num = len(input_ids) if input_ids is not None else 0
    input_token_num += self.reserve_input_token_num
    if input_token_num >= self.context_len:
        ...

However, the request then fails the scheduler's own length check:

# initialize before returning
self.init_req_max_new_tokens(req)

# Validate prompt length
error_msg = validate_input_length(
    req,
    self.max_req_input_len,
    self.server_args.allow_auto_truncate,
)
if error_msg:
    req.set_finish_with_abort(error_msg)
    self._add_request_to_queue(req)
    return

This marks the request as aborted. The aborted request is then added to the waiting_queue as if it were a normal one (its input_ids are set to [0]). As a result, its out_cache_loc and its request slot (req_pool_idx) are allocated, but on the mixed-chunk path only the out_cache_loc is ever released:

if self.is_mixed_chunk and self.enable_overlap and req.finished():
    # Free the one delayed token for the mixed decode batch
    j = len(batch.out_cache_loc) - len(batch.reqs) + i
    self.token_to_kv_pool_allocator.free(batch.out_cache_loc[j : j + 1])
    # Note: req.req_pool_idx is not freed here, so the slot in
    # req_to_token_pool leaks
    continue

Eventually, this leads to a check_memory failure, which brings down the entire engine and reduces SGLang’s robustness to malformed or edge-case requests.

@Muqi1029 (Contributor, Author)

DO NOT MERGE: this may be incompatible with retracted requests; I am fixing it now.

@hnyls2002 (Collaborator)

Thanks for bringing this up. I think this issue has been solved by #12312 and #12224.

In the previous implementation, we set the abort and the finish reason at the same time, which is not compatible with the regular finish check or the overlap scheduler.

But now, we set to_finish instead of finish_reason for those requests which are going to be aborted but still need to enter the event loop.
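
For illustration only, here is a rough sketch of the deferred-abort pattern described above, under the assumption that cleanup happens on the regular finish path; apart from to_finish and finish_reason, every name below is invented for the example and is not taken from sglang:

class Req:  # illustrative stand-in, not the real sglang Req class
    def __init__(self, rid, req_pool_idx=None):
        self.rid = rid
        self.req_pool_idx = req_pool_idx
        self.finish_reason = None  # set only by the regular finish check
        self.to_finish = None      # deferred abort reason

def mark_abort(req, error_msg):
    # Instead of setting finish_reason immediately (the old behavior),
    # record the pending abort so the request still enters the event loop.
    req.to_finish = error_msg

def regular_finish_check(req, req_to_token_pool):
    # The regular finish path promotes to_finish to finish_reason and frees
    # the request slot exactly once, so check_memory stays consistent.
    if req.to_finish is not None and req.finish_reason is None:
        req.finish_reason = req.to_finish
    if req.finish_reason is not None and req.req_pool_idx is not None:
        req_to_token_pool.free(req.req_pool_idx)
        req.req_pool_idx = None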

hnyls2002 closed this on Nov 17, 2025